Loading Discriminative Feature Representations in Hidden Layer

Authors

  • Daw-Ran Liou
  • Yang-En Chen
  • Cheng-Yuan Liou
Abstract

This work explores the neural features that are trained by decreasing a discriminative energy. It directly resolves the unfaithful representation problem and the ambiguous internal representation problem in various backpropagation training algorithms for MLPs. It also indirectly overcomes the premature saturation problem.

Keywords: multilayer perceptron; deep learning; Boltzmann machine; ambiguous internal representation; unfaithful representation; classification; image restoration
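The abstract names the training principle (decreasing a discriminative energy over hidden-layer features) without spelling out the energy itself, so the sketch below is only an illustration of the general idea, not the paper's method. It trains a single hidden layer by numerical gradient descent on an assumed energy, taken here to be within-class spread minus between-class separation of the hidden activations; the toy data, the energy form, and the helper names (hidden, energy) are all assumptions for illustration.

# Minimal illustrative sketch, NOT the paper's algorithm: one hidden layer whose
# weights are adjusted by gradient descent on an assumed "discriminative energy"
# (within-class spread minus between-class separation of hidden activations).
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data in 2-D (assumed, for illustration only).
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(3.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

n_hidden = 8
W = rng.normal(0.0, 0.1, (2, n_hidden))
b = np.zeros(n_hidden)

def hidden(X, W, b):
    # Hidden-layer feature representations.
    return np.tanh(X @ W + b)

def energy(H, y):
    # Assumed discriminative energy: small when same-class features cluster
    # together and the two class means lie far apart.
    mu0, mu1 = H[y == 0].mean(axis=0), H[y == 1].mean(axis=0)
    within = (((H[y == 0] - mu0) ** 2).sum(axis=1).mean()
              + ((H[y == 1] - mu1) ** 2).sum(axis=1).mean())
    between = ((mu0 - mu1) ** 2).sum()
    return within - between

# Decrease the energy by central-difference gradient descent on W and b.
lr, eps = 0.05, 1e-5
for step in range(100):
    for param in (W, b):            # W and b are updated in place
        grad = np.zeros_like(param)
        it = np.nditer(param, flags=["multi_index"])
        for _ in it:
            idx = it.multi_index
            old = param[idx]
            param[idx] = old + eps
            e_plus = energy(hidden(X, W, b), y)
            param[idx] = old - eps
            e_minus = energy(hidden(X, W, b), y)
            param[idx] = old
            grad[idx] = (e_plus - e_minus) / (2 * eps)
        param -= lr * grad

print("discriminative energy after training:", energy(hidden(X, W, b), y))

Decreasing this energy pulls same-class hidden representations together and pushes the two class means apart, which is one simple way hidden-layer features can become discriminative without reference to a separate output layer.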


Related articles

Bio-inspired Multi-layer Spiking Neural Network Extracts Discriminative Features from Speech Signals

Spiking neural networks (SNNs) enable power-efficient implementations due to their sparse, spike-based coding scheme. This paper develops a bio-inspired SNN that uses unsupervised learning to extract discriminative features from speech signals, which can subsequently be used in a classifier. The architecture consists of a spiking convolutional/pooling layer followed by a fully connected spiking...


Deep Dynamic Models for Learning Hidden Representations of Speech Features

Deep hierarchical structure with multiple layers of hidden space in human speech is intrinsically connected to its dynamic characteristics manifested in all levels of speech production and perception. The desire and an attempt to capitalize on a (superficial) understanding of this deep speech structure helped ignite the recent surge of interest in the deep learning approach to speech recognitio...


Sparse Deep Nonnegative Matrix Factorization

Nonnegative matrix factorization is a powerful technique to realize dimension reduction and pattern recognition through single-layer data representation learning. Deep learning, however, with its carefully designed hierarchical structure, is able to combine hidden features to form more representative features for pattern recognition. In this paper, we propose sparse deep nonnegative matrix fac...


Discriminative MLPs in HMM-based recognition of speech in cellular telephony

Deviating from the conventional Hidden Markov Model-Multi-Layer Perceptron (HMM-MLP) hybrid paradigm of using MLP for classification, the proposed discriminative MLP technique uses MLP as a mapping module for feature extraction for conventional HMM-based systems. The MLP is discriminatively trained on the phonetically labeled training data to generate the phoneme posterior probabilities. We achie...


NDDR-CNN: Layer-wise Feature Fusing in Multi-Task CNN by Neural Discriminative Dimensionality Reduction

State-of-the-art Convolutional Neural Network (CNN) benefits much from multi-task learning (MTL), which learns multiple related tasks simultaneously to obtain shared or mutually related representations for different tasks. The most widely used MTL CNN structure is based on an empirical or heuristic split on a specific layer (e.g., the last convolutional layer) to minimize multiple task-specific...




Publication date: 2017